

Trump's AI plan is a bulwark against the rising threat from China

FOX News

In July, some of the brightest minds in American technology descended on Washington to celebrate a major milestone: the launch of President Donald Trump's bold initiative to ensure the United States remains the world's unrivaled leader in artificial intelligence (AI). Let me be blunt: the AI arms race is no longer theoretical. And we cannot afford to come in second place. In business, if you don't constantly adapt and innovate, you lose. If we fail to lead in AI, we risk surrendering our economic and national security edge to the Chinese Communist Party (CCP) -- a regime that seeks to challenge American technological supremacy and reshape the global order in its authoritarian image.


House of Lords pushes back against government's AI plans

The Guardian

The vote came days after hundreds of artists and organisations including Paul McCartney, Jeanette Winterson, Dua Lipa and the Royal Shakespeare Company urged the prime minister not to "give our work away at the behest of a handful of powerful overseas tech companies". The amendment was tabled by crossbench peer Beeban Kidron and was passed by 272 votes to 125. The bill will now return to the House of Commons. If the government removes the Kidron amendment, it will set the scene for another confrontation in the Lords next week. Lady Kidron said: "I want to reject the notion that those of us who are against government plans are against technology.


Performing arts leaders issue copyright warning over UK government's AI plans

The Guardian

More than 30 performing arts leaders in the UK, including the bosses of the National Theatre, Opera North and the Royal Albert Hall, have joined the chorus of creative industry concern about the government's plans to let artificial intelligence companies use artists' work without permission. They also urged the government to support the "moral and economic rights" of the creative community in music, dance, drama and opera. The 35 signatories of the statement include the chief executives of the Sadler's Wells dance theatre, the Royal Shakespeare Company, the City of Birmingham Symphony Orchestra and the Leeds Playhouse. The performing arts bosses added that they embraced advances in technology and were "participants" in innovation, but stated the government's plans risked undermining their ability to participate in the development and deployment of AI. Critics of the opt-out plan have described it as unfair and impractical.


The Download: our relationships with robots, and DOGE's AI plans

MIT Technology Review

Since the 1970s, we've sent a lot of big things to Mars. But when NASA successfully sent twin Mars Cube One spacecraft, the size of cereal boxes, in November 2018, it was the first time we'd ever sent something so small. Just making it this far heralded a new age in space exploration. NASA and the community of planetary science researchers caught a glimpse of a future long sought: a pathway to much more affordable space exploration using smaller, cheaper spacecraft. RIP Roberta Flack, one of the realest to ever do it.


It's Time to Worry About DOGE's AI Plans

The Atlantic - Technology

Donald Trump and Elon Musk's chaotic approach to reform is upending government operations. Critical functions have been halted, tens of thousands of federal staffers are being encouraged to resign, and congressional mandates are being disregarded. The next phase: The Department of Government Efficiency reportedly wants to use AI to cut costs. According to The Washington Post, Musk's group has started to run sensitive data from government systems through AI programs to analyze spending and determine what could be pruned. This may lead to the elimination of human jobs in favor of automation.


Rishi Sunak's AI plan has no teeth – and once again, big tech is ready to exploit that | Georg Riekeles and Max von Thun

The Guardian

This month, the British prime minister, Rishi Sunak, convened government representatives, AI companies and experts at Bletchley Park – the historic home of Allied code-breaking during the second world war – to discuss how the much-hyped technology can be deployed safely. The summit has been rightly criticised on a number of grounds, including prioritising input from big tech over civil society voices, and fixating on far-fetched existential risks over tangible everyday harms. But the summit's biggest failure – itself a direct result of those biases – was that it had nothing meaningful to say about reining in the dominant corporations that pose the biggest threat to our safety. The summit's key "achievements" consisted of a vague joint communique warning of the risks from so-called frontier AI models and calling for "inclusive global dialogue", plus an (entirely voluntary) agreement between governments and large AI companies on safety testing. Yet neither of these measures has any real teeth, and what's worse, they give powerful corporations a privileged seat at the table when it comes to shaping the debate on AI regulation.


Pentagon's AI plan must include offense and defense under House-passed bill: 'DOD has to catch up'

FOX News

AGI, while powerful, could have negative consequences, warned Diveplane CEO Mike Capps and Liberty Blockchain CCO Christopher Alexander. The House last week passed a defense policy bill that strongly encourages the Pentagon to use artificial intelligence to its advantage, but also requires defense officials to examine how America's national security infrastructure may be vulnerable to AI systems deployed by China, Russia and other adversaries. Rep. Marc Molinaro, R-N.Y., pushed to include language in the bill requiring an assessment of AI vulnerabilities, and watched it pass easily on the House floor. That's a strong sign the language will remain in the final bill even after a negotiation with the Senate, and Molinaro told Fox News Digital that this assessment is needed in the face of ever-evolving AI capabilities. "The average person knows at least the rudimentary use of AI. China, terrorists, Russia are using AI in a much more sophisticated way, certainly as aggressors," he told Fox News Digital.


The Download: lab-grown meat's climate impact, and Congress' AI plans

MIT Technology Review

Soon, the menu in your favorite burger joint could include not only options made with meat, mushrooms, and black beans but also patties packed with lab-grown animal cells. Not only did the US just approve the sale of cultivated meat for the first time, but the industry is in the process of raising billions of dollars to bring its products to restaurants and grocery stores. In theory, that should be a big win for the climate--greenhouse-gas emissions from the animals we eat account for nearly 15% of the global total. But whether cultivated meat really is better for the environment is still not entirely clear. Two weeks ago, Senate majority leader Chuck Schumer announced his grand strategy for AI policymaking at a speech in Washington, DC, ushering in what might be a new era for US tech policy.


The Download: ChatGPT's impact on schools, and Elon Musk's AI plans

MIT Technology Review

This year millions of people have tried--and been wowed by--artificial intelligence systems. When ChatGPT launched last November, the chatbot became an instant hit among students, many of whom started using it to write essays and homework. Alarmed by an influx of AI-generated essays, schools around the world moved swiftly to ban the use of the technology. But there's an unexpected upside: ChatGPT has forced schools to quickly adapt and start teaching kids an ad hoc curriculum of AI 101. The big hope is that educators and policymakers will realize just how important it is to teach the next generation critical thinking skills around AI. Read the full story.


The Download: Driverless cars' AI plan, and stretching cells with a robotic shoulder

MIT Technology Review

Why you don't really know what you know October 2020 What does it really mean to know anything? How well can we understand the world when so much of our knowledge relies on evidence and argument provided by others? These questions matter not only to scientists. Many other fields are becoming more complex, and we have access to far more information and informed opinions than ever before. Yet at the same time, increasing political polarization and misinformation are making it hard to know whom or what to trust.